
    Hyper Parameter Optimization for Transfer Learning of ShuffleNetV2 with Edge Computing for Casting Defect Detection

    A casting defect is an unwanted irregularity that renders a cast part expendable, and it is the most undesirable outcome of the metal casting process. In casting defect detection, deep learning models based on Convolutional Neural Networks (CNNs) have been widely used, but most of these models require substantial processing power. This work proposes a low-power, ShuffleNetV2-based transfer learning model for defect identification with low latency, easy upgrading, and increased efficiency, forming an automatic visual inspection system with edge computing. Initially, various image transformation techniques were used for data augmentation on casting datasets to test the model's flexibility across diverse castings. Subsequently, a pre-trained lightweight ShuffleNetV2 model is adapted, and its hyperparameters are fine-tuned to optimize the model. The result is a lightweight, adaptive, and scalable model well suited to resource-constrained edge devices. Finally, the trained model can be deployed on an NVIDIA Jetson Nano kit as an edge device to speed up detection. Precision, recall, accuracy, and F1-score were used for model evaluation. According to these statistical measures, the model achieves 99.58% accuracy, 100% precision, 99% recall, and a 100% F1-score.
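    The reported metrics follow the standard confusion-matrix definitions. A minimal sketch of how they are computed (the counts below are illustrative only, not the paper's actual confusion matrix):

    ```python
    def classification_metrics(tp, fp, fn, tn):
        """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + fp + fn + tn)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        return accuracy, precision, recall, f1

    # Illustrative counts only -- chosen so precision is 100% and recall is 99%,
    # mirroring the rounded figures reported above.
    acc, prec, rec, f1 = classification_metrics(tp=99, fp=0, fn=1, tn=139)
    ```

    Note that with zero false positives precision is exactly 1.0, while a single false negative pulls recall down to 0.99; the reported "100%" F1-score is therefore a rounded value, since F1 is the harmonic mean of the two.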

    Cascade Network Model to Detect Cognitive Impairment using Clock Drawing Test

    The Clock-Drawing Test (CDT) is commonly used to screen for cognitive impairment. Diagnoses are based on analyzing specific features of a clock drawn with pen and paper. Manual interpretation of these features is time-consuming, and test results depend heavily on the knowledge of clinical experts. With the spread of smart devices and advances in deep learning algorithms, the need for a consistent, automatic screening system for cognitive impairment has grown. This work proposes a simple, fast, low-cost, automated CDT screening technique. Initially, transferred deep convolutional neural networks (ResNet152, EfficientNetB4, and DenseNet201) are used as feature extractors. The transfer learning technique makes it possible to build on existing models and train much more quickly. The extracted features are then cascaded into a feature fusion layer to improve the quality of the learned features, and the resulting feature vector becomes the input to the classifier. The performance of the model is experimentally evaluated on a real dataset and compared with existing state-of-the-art models. The obtained results demonstrate that the Cascaded Network Model achieves high performance, with an accuracy of 97.76%.
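    The cascade described above, where several pretrained backbones feed one fusion layer, can be sketched with stand-in extractors. Everything here is illustrative: the paper's extractors are pretrained ResNet152, EfficientNetB4, and DenseNet201 backbones with their classification heads removed, while this sketch replaces each with a fixed random projection, and the feature dimensions are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_extractor(in_dim, out_dim):
        """Stand-in for a pretrained CNN backbone: a fixed projection + ReLU."""
        w = rng.standard_normal((in_dim, out_dim))
        return lambda x: np.maximum(x @ w, 0.0)

    # Three "backbones" with different output sizes (illustrative dimensions).
    extractors = [make_extractor(256, d) for d in (64, 48, 32)]

    def fuse_features(x):
        """Cascade fusion: concatenate each backbone's feature vector."""
        return np.concatenate([f(x) for f in extractors], axis=-1)

    x = rng.standard_normal(256)   # a flattened input image, illustratively
    fused = fuse_features(x)       # 64 + 48 + 32 = 144-dimensional fused vector
    ```

    The fused vector would then be passed to a classifier head; concatenation keeps each backbone's features intact and lets the classifier weight them jointly, which is the usual motivation for feature-level fusion.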
